Low-rank modeling has many important applications in computer vision and machine learning. While the matrix rank is often approximated by the convex nuclear norm, the use of nonconvex low-rank regularizers has demonstrated better empirical performance. However, the resulting optimization problem is much more challenging, and recent state-of-the-art methods require an expensive full SVD in each iteration. In this paper, we show that for many commonly used nonconvex low-rank regularizers, a cutoff can be derived to automatically threshold the singular values obtained from the proximal operator. This allows the proximal operator to be efficiently approximated by the power method. Based on this, we develop a proximal gradient algorithm (and its accelerated variant) with inexact proximal splitting, and prove that a convergence rate of O(1/T), where T is the number of iterations, is guaranteed. Furthermore, we show that the proposed algorithm parallelizes well, achieving nearly linear speedup w.r.t. the number of threads. Extensive experiments on matrix completion and robust principal component analysis show a significant speedup over the state-of-the-art. Moreover, the matrix solution obtained is more accurate and has a lower rank than that produced by the nuclear norm regularizer.
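To make the core idea concrete, the following is a minimal sketch (not the paper's actual implementation) of an inexact proximal step: a few power iterations, warm-started from a previous low-rank factor, capture the leading subspace, so only a small projected matrix needs an SVD, and a nonconvex thresholding rule with a cutoff zeroes out the remaining singular values. The function names, the warm-start argument `R`, and the number of power iterations are illustrative assumptions; for the thresholding rule we use the elementwise proximal map of the ℓ0 penalty (hard thresholding), whereas the paper covers a broader family of nonconvex regularizers.

```python
import numpy as np

def power_method_topk(Z, R, n_iter=3):
    """Approximate an orthonormal basis Q for the top left singular subspace
    of Z via power iterations, warm-started with R (e.g., the factor from the
    previous iterate). Shapes: Z is (m, n), R is (n, k), Q is (m, k)."""
    Q = np.linalg.qr(Z @ R)[0]
    for _ in range(n_iter):
        Q = np.linalg.qr(Z @ (Z.T @ Q))[0]
    return Q

def approx_prox(Z, R, lam, threshold_fn):
    """Inexact proximal step: since singular values below a cutoff are mapped
    to zero by the nonconvex thresholding, an SVD of the small (k, n) matrix
    Q^T Z suffices instead of a full SVD of Z."""
    Q = power_method_topk(Z, R)
    U, s, Vt = np.linalg.svd(Q.T @ Z, full_matrices=False)
    s_thr = threshold_fn(s, lam)   # scalar proximal map of the chosen penalty
    keep = s_thr > 0               # singular values below the cutoff vanish
    return (Q @ U[:, keep]) * s_thr[keep] @ Vt[keep, :]

# Illustrative nonconvex rule: proximal map of lam * ||.||_0 on each singular
# value, i.e., keep s if s > sqrt(2 * lam), otherwise set it to zero.
hard_thr = lambda s, lam: np.where(s > np.sqrt(2.0 * lam), s, 0.0)
```

In a proximal gradient loop one would call something like `X = approx_prox(X - step * grad, R, step * mu, hard_thr)`, reusing the right factor of the previous iterate as the warm start `R` so that each iteration costs only a few matrix-vector-style products rather than a full SVD.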